
Utilizing Scientific Deep Learning to Accelerate Data-Consistent Inversion
Stochastic inverse problems have attracted increasing attention in recent years due to advances in data acquisition that enable the use of vast amounts of data to construct data-informed, physics-based computational models. In particular, a recently developed approach, called data-consistent inversion (DCI), solves a specific class of stochastic inverse problems in which one seeks an aleatoric characterization of model inputs over a population of assets or individuals [1]. This approach has also recently been used to construct population-informed priors that enhance the information gained when performing asset-specific Bayesian inference for digital twins [2]. At the same time, scientific machine learning (SciML) methods have become far more prevalent in computational science, largely due to their ability to learn and exploit low-dimensional structure in high-dimensional data. The use of approximate models and maps in the context of DCI has been studied in detail in [3,4]. In this presentation, we describe some of our recent work on using data-driven SciML surrogate models to accelerate the solution of both Bayesian and data-consistent inverse problems. On the theoretical side, we show that the universal approximation theorem justifies the use of such surrogate models for these problems. On the practical side, however, we demonstrate that errors and uncertainties in the surrogates can significantly degrade the accuracy of the inferred probability distribution and, therefore, of any subsequent predictions. Time permitting, we will also discuss a recently developed approach for quantifying the uncertainty in the solution of the stochastic inverse problem induced by the use of these approximate models.
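The abstract does not include an implementation, but the DCI update it refers to can be illustrated with a short numerical sketch. Assuming the standard density-ratio form of DCI (reweight samples from an initial density by the ratio of the observed density to the push-forward of the initial density through the map), the hypothetical Python example below uses a toy one-dimensional map, a deliberately imperfect "surrogate," and a NumPy-only kernel density estimate. All model forms, names, and numbers are illustrative assumptions, not taken from the referenced work.

```python
import numpy as np

rng = np.random.default_rng(0)

def qoi_map(lam):
    """Hypothetical 'expensive' parameter-to-observable map."""
    return lam**3 + lam

def surrogate(lam):
    """Hypothetical SciML surrogate with a small, deliberate approximation error."""
    return lam**3 + lam + 0.01 * np.sin(5.0 * lam)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def kde(x_eval, samples):
    """Gaussian kernel density estimate with a Silverman-style bandwidth."""
    h = 1.06 * samples.std() * len(samples) ** (-0.2)
    z = (x_eval[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * h * np.sqrt(2.0 * np.pi))

# Initial (population-level) description of the uncertain parameter.
n = 4000
lam = rng.uniform(-1.0, 1.0, n)

# Push the initial samples through the surrogate and estimate the
# push-forward density of the quantity of interest.
q = surrogate(lam)
pushforward = kde(q, q)

# Observed density on the quantity of interest, given over the population.
# r is the DCI density ratio: observed / push-forward.
r = normal_pdf(q, 0.25, 0.1) / pushforward

# DCI update via accept/reject: accepted samples follow the updated density,
# whose push-forward (approximately) matches the observed density.
accept = rng.uniform(0.0, 1.0, n) < r / r.max()
lam_updated = lam[accept]

print(f"mean ratio E[r] = {r.mean():.3f} (near 1 when the observed density is predictable)")
print(f"accepted {accept.sum()} of {n} samples; "
      f"updated push-forward mean = {surrogate(lam_updated).mean():.3f}")
```

Replacing `surrogate` with `qoi_map` in the push-forward step shows the point the abstract makes: the inferred updated distribution shifts with the surrogate's error, which is why quantifying that error matters for any downstream predictions.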